Turning Fasteners Around

Using Restyle to add realistic surface characteristics to fasteners of various materials.

Real-time View screenshot

Brass
Restyle image applied as a camera-mapped texture on a Flat material, against a white background with Occlusion Ground Shadows.

Stainless steel

Galvanized steel

Blue Anodized Aluminum

Black Oxide Steel

Chrome Plated

Rusted Steel

White Nylon

Prompt used for the brass version:

A precision machined brass thumbscrew or knurled bolt photographed in a professional photo studio. The part features a large knurled head with fine diagonal cross-hatching texture and a threaded shaft. The brass has a warm golden color with a spun finish showing subtle circular machining marks and realistic surface imperfections including minor scratches, tool marks, and slight oxidation patina. The lighting is professional studio photography with soft shadows, shot against a clean white seamless background. High detail macro photography style, shallow depth of field, commercial product photography aesthetic. The brass surface shows natural aging and use marks typical of machined metal parts.

The prompts for the other material variations were transformed from the brass prompt.
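One simple way to produce material variants like these is template substitution on the base prompt. This is only a sketch of that idea, not the author's actual workflow, and the material descriptors below are illustrative assumptions rather than the exact wording used for the renders above:

```python
# Sketch: derive material-variant prompts from a shared base template.
# The surface descriptions are illustrative assumptions, not the
# exact prompts used for the images in this thread.
BASE = (
    "A precision machined {material} thumbscrew or knurled bolt "
    "photographed in a professional photo studio. Large knurled head "
    "with fine diagonal cross-hatching and a threaded shaft. {surface} "
    "Professional studio lighting, soft shadows, clean white seamless "
    "background, high-detail macro photography, shallow depth of field."
)

MATERIALS = {
    "brass": "Warm golden color with a spun finish, subtle circular "
             "machining marks, minor scratches, and slight oxidation patina.",
    "stainless steel": "Cool silvery tone with a brushed finish and "
                       "faint tool marks.",
    "black oxide steel": "Matte near-black coating with a slight sheen "
                         "on worn edges.",
}

def build_prompt(material: str) -> str:
    """Fill the base template with one material's name and surface notes."""
    return BASE.format(material=material, surface=MATERIALS[material])

for m in MATERIALS:
    print(build_prompt(m)[:60], "...")
```

Keeping everything except the material description fixed is also a cheap way to improve consistency across the variant images.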

3 Likes

This is pretty neat. I think that, once again, the weak link in AI workflows is the human: prompt training is a real need for anyone looking to get decent results out of any AI engine, for any use. I look at my own results and they are subpar and lacking, but then I realize my prompts are 20 words, compared to this type of prompt, which is so much more detailed and properly formatted.

If I’m correct, KeyShot AI Shots is based on FLUX.1, which, in short, understands natural sentences and works less well with the kind of keyword prompts people used with SDXL and earlier generations of AI image models.

To get the right prompt for what you want, you can also ask Copilot or any other AI to write it for you, telling it the prompt will be used with FLUX.1. Or, if you have an image with the texture/look you like, you can throw that in and ask Copilot to describe it in a way FLUX.1 understands.

I think it’s pretty much a waste of time to get trained in writing prompts, since AI handles that very well. Right now it’s FLUX.1; next year it might be another model with completely different needs.

As always, practice makes perfect, and experimenting a bit to see what gives you the best results always helps. That’s also why I would love the images + prompts in KeyShot to be stored, so you can review what worked earlier in a similar situation.

Another thing worth considering is that the model will only ‘read’ so much of a prompt. I think FLUX caps out at 256 tokens, but others say 512, so I’m not sure. While you can throw really long texts at it, it won’t read everything, and from my own testing I don’t think it simply stops reading; the way it processes a prompt is a bit more complicated than that.
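A rough way to sanity-check a prompt against a token budget is a word-count heuristic. This is only a crude sketch: the ~0.75 words-per-token ratio is a common rule of thumb, and the real text encoders FLUX.1 uses will tokenize differently.

```python
import math

def estimate_tokens(prompt: str, words_per_token: float = 0.75) -> int:
    """Crude token estimate assuming ~0.75 words per token on average.
    Only a heuristic -- actual tokenizers split text differently."""
    return math.ceil(len(prompt.split()) / words_per_token)

def fits_budget(prompt: str, budget: int = 256) -> bool:
    """Check the estimate against an assumed token budget (256 here)."""
    return estimate_tokens(prompt) <= budget

prompt = "A precision machined brass thumbscrew photographed in a studio."
print(estimate_tokens(prompt), fits_budget(prompt))
```

If a long prompt blows the budget, it is usually better to trim the least important descriptors yourself than to let the model silently down-weight them.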

4 Likes

Exactly what Oscar said. If there’s one thing LLMs are brilliant at, it’s generating structured text, which makes prompt generation an excellent use case for them.
These tools are quickly becoming like search engines: soon everyone will be using them as just another essential tool to get work done.

That all makes sense, and the irony of using AI to write a prompt for AI is not lost on me 🙂

2 Likes

Haha, it is ironic. I must say there are still some nice documents on how to structure things a bit. Compared to the older tools, FLUX is pretty capable of creating complex scenes with lots of detail.

Not too long ago I came across this guide, which has some nice tips on getting good results with FLUX, and it’s a nice read since it also gives some insight into how the model interprets a prompt. I particularly liked the approach of building up a complex image by describing its layers. Compared to older AI generation this is so much smarter; the older models always got really confused if you wanted to combine multiple ideas.

FLUX.1 Prompt Guide

1 Like

It can definitely feel like a game of inception. But you as a user are still in control of directing the prompt and feeding the inputs. The best prompts I’ve made have been the result of some back-and-forth dialogue with an LLM: trying out a first proposal, then making adjustments, shifting the emphasis, and restructuring.
In this iteration of AI Shots, the most important takeaway for users is definitely that well-crafted, sprawling prompts get the best results. Simple prompts lead to simple, and frankly unimpressive, results: they leave lots of parameters up for interpretation (some call it hallucination). With detailed prompts you can actually achieve very decent consistency between subsequent image generations, since you leave less room for free interpretation. You will also get the mood, lighting, materials, etc. you intend, without the image generator “slacking”.

Prompt engineering is, to be frank, a pain. While all these images look great, the question I have to ask is: for this type of application, is it actually more efficient than a decent VISUAL interface for lighting and material specification?
Those of us who grew up with physical rendering understand the terminology, so we can make rapid tweaks to settings in real time.
With AI prompting you don’t get that. It is akin to the old rendering days: apply settings, hit render, wait hours, tweak further, and repeat.
The whole ethos of KeyShot is that it is a visual tool with real-time feedback.

Lose that, and you lose the reason to use it. You might as well switch to one of the (many) new-generation AI-only tools.

I was able to spend more time playing with the AI features in KS, and while it pains me to say this, it’s nearly worthless to my workflow. I have no need to restyle any of my products into a steampunk version, or to let any AI decide what creative direction we should go. That is already decided in our creative meetings, at the behest of our creative director, and he’s not going to accept (nor should he) any feedback from AI. The look and feel of our images is carefully art-directed and specific, something AI is not capable of; its iterations are random and uncontrollable.

The back plate function is what I was most excited about, but again, it’s not art-directable. It does not follow my guidance, and no matter the words or prompts I use, it does not follow the camera angles, lens settings, zoom settings, etc. It gives me the wrong perspectives (wide angle vs. telephoto), and I cannot tune my prompts to match the geometry. Also, the fact that it bakes the render into the background is not great; it very much limits what can be done after the render. It’s too random.

What appears to have happened here, which disappoints me a bit, is that KS bought into the AI buzz, grabbed someone’s prefabbed AI, and put KS clothes on it. It’s just not that useful. I can’t imagine (at least not yet) that any designer worth their salary is going to use it in production. I can’t imagine going back into a project a year from now that had AI elements in it and, when updates or changes are needed, getting anything that looks the same or produces the same results once the AI elements are tweaked.

AI image generation just doesn’t have the control or predictability for production use. Sure, it’s fun to play with, but without substantial fine controls, it’s just not usable. My time as a paid designer is better spent searching Adobe Stock for the background plate that matches my scene the closest, knowing intrinsically what I can and cannot fix in Photoshop to make it work. AI image generation is currently fine for ethereal fantasy artwork, but right now its workflow is not efficient enough for production. I don’t have 3 days (or even 4 hours) to draft a prompt that MIGHT give me the background plate I need for a scenario, when I can go to a stock site and find an image in 10 minutes, slap it in PS, blur and desaturate it, and send it for approvals.

Now, I do realize that my stuff is not the prettiest. It’s not “sneaker on the beach” stuff; it’s very industrial, hard-edged, 80%-good-enough imagery. Most of our stuff is “better than what we had, which is nothing” type renders. So AI just might not be for me, but it might work great for others. I’m just a bit bummed about all the engineering hours spent on this, when they could have been used for other things that would have kept KS on the leading edge of what it’s good at.

1 Like

In discussions I often compare AI with stock images. For some clients it might work, since some can also live with stock images. But stock images will never beat a photographer, and, like AI, they’re never exactly what you want. That’s also why AI won’t beat or replace (industrial) designers in the near future, and I don’t think that will change anytime soon.